Towards A Discipline of Dynamic Programming

Author

  • Robert Giegerich
Abstract

Dynamic programming is a classic programming technique, applicable in a wide variety of domains, such as stochastic systems analysis, operations research, combinatorics of discrete structures, flow problems, parsing of ambiguous languages, and biosequence analysis. Yet, heretofore no methodology was available to guide the design of such algorithms. The matrix recurrences that typically describe a dynamic programming algorithm are difficult to construct, error-prone to implement, and almost impossible to debug. This article introduces an algebraic style of dynamic programming over sequence data. We define its formal framework, including a formalization of Bellman's principle. We suggest a language for algorithm design on a convenient level of abstraction. We outline three ways of implementation, including an embedding in a lazy functional language. The workings of the new method are illustrated by a series of examples from diverse areas of computer science.

1 The power and scope of dynamic programming

1.1 Dynamic programming: a world without rules?

Dynamic programming (DP) is one of the classic programming paradigms, introduced even before the term Computer Science was firmly established. When applicable, DP often allows one to solve combinatorial optimization problems over a search space of exponential size in polynomial space and time. Bellman's "Principle of Optimality" [Bel57] belongs to the core knowledge we expect from every computer science graduate. Significant work has gone into formally characterizing this principle [Mor82,Mit64], formulating DP in different programming paradigms [Moo99,Cur97], and studying its relation to other general programming methods such as greedy algorithms [BM93].

The scope of DP is enormous. Much of the early work was done in the area of physical state transition systems and operations research [BD62].
Other, simpler examples (more suited for computer science textbooks) are optimal matrix chain multiplication, polygon triangulation, and string comparison. The analysis of molecular sequence data has fostered increased interest in DP. Protein homology search, RNA structure prediction, gene finding, and discovery of regulatory signals in RNA pose string processing problems in unprecedented variety and data volume. A recent textbook in biosequence analysis [DEKM98] lists 11 applications of DP in its introductory chapter, and many more in the sequel.

Developing a DP algorithm for an optimization problem over a nontrivial domain has intrinsic difficulties. The choice of objective function and search space are interdependent, and closely tied up with questions of efficiency. Once completed, all DP algorithms are expressed via recurrence relations between tables holding intermediate results. These recurrences provide a very low level of abstraction, and subscript errors are a major nuisance even in published articles. The recurrences are difficult to explain, painful to implement, and almost impossible to debug: a subtle error gives rise to a suboptimal solution every now and then, which can hardly be detected by human inspection. In this situation it is remarkable that neither the literature cited above, nor computer science textbooks [CLR90,Gus97,Meh84,BB88,AHU83,Sed89], provide guidance in the development of DP algorithms.

1.2 The promises of Algebraic Dynamic Programming

Algebraic dynamic programming (ADP) is a new style of dynamic programming and a method for algorithm development, designed to alleviate this situation. It allows one to design, reason about, tune, and even test DP algorithms on a more abstract level. This is achieved by a restructuring of concerns: any DP algorithm evaluates a search space of candidate solutions under a scoring scheme and an objective function.
The classic DP recurrences reflect the three aspects of search space construction, scoring and choice, and efficiency in an indistinguishable fashion. In the new algebraic approach, these concerns are separated. The search space is described by a yield grammar, which is a tree grammar generating a string language. The ADP developer takes the view that for a given input sequence, "first" the search space is constructed, leading to an enumeration of all candidate solutions. This is a parsing problem, solved by a standard device called a tabulating yield parser. The developer can concentrate on the design of the grammar.

Evaluation and choice are captured by an evaluation algebra. It is important (and in contrast to traditional presentations of DP algorithms) that this algebra comprises all aspects relevant to the intended objective of optimization, but is independent of the description of the search space. The ADP developer takes the view that a "second" phase evaluates the candidates enumerated by the first phase, and makes choices according to some optimality criterion. Of course, the interleaving of search space construction and evaluation is essential to prevent combinatorial explosion. It is contributed by the ADP method in a way transparent to the developer.

By the separate description of search space and evaluation, ADP also produces modular and therefore re-usable algorithm components. More complex problems can be approached with a better chance of success, and there is no loss of efficiency compared to ad-hoc approaches. The relief from formulating explicit recurrences brings about a boost in programming productivity, captured by practitioners in the slogan "No subscripts, no errors!".

The ADP approach has emerged recently in the context of biosequence analysis, but it pertains to dynamic programming over sequential data in general.
"Sequential data" does not mean we only study string problems – a chain of matrices to be multiplied, for example, is sequential input data in our sense. An informal introduction, written for the needs of the bioinformatics community, has appeared in [Gie00]. The present article gives a complete account of the foundations of the ADP method and, almost in the style of a tutorial, shows its application to several classic combinatorial optimization problems in computer science.

Like any methodical work, this article suffers from the dilemma that, for the sake of exposition, the problems solved have to be rather simple, such that the impression may arise that methodical guidance is not really required. The ADP method has been applied to several nontrivial problems in the field of biosequence analysis. An early application is a program for aligning recombinant DNA [GKW99], when the ADP theory was just about to emerge. Two recent applications are searching for sequence/structure motifs in DNA or RNA [MG02], and the problem of folding saturated RNA secondary structures, posed by Zuker and Sankoff in [ZS84] and solved in [EG01].

1.3 Overview of this article

In Section 2, we review some new and some well-known applications of dynamic programming over sequence data, in the form in which they are traditionally presented. This provides a common basis for the subsequent discussion. By the choice of examples, we illustrate the scope of dynamic programming to a certain extent. In particular, we show that (single) sequence analysis and (pairwise) sequence comparison are essentially the same kind of problem when viewed on a more abstract level. The applications studied here will later be reformulated in the style and notation of ADP.

In Section 3 we introduce the formal basis of the algebraic dynamic programming method: yield grammars and evaluation algebras.
We shall argue that these two concepts precisely capture the essence of dynamic programming, at least when applied to sequence data. Furthermore, we introduce a special notation for expressing ADP algorithms. Using this notation, an algorithm is completely described on a very abstract level, and can be designed and analysed irrespective of how it is eventually implemented. We discuss efficiency analysis and point to other work concerning techniques to improve efficiency.

In Section 4 we formulate yield grammars and evaluation algebras for the applications described in Section 2. Moreover, we show how problem variations can be expressed transparently using the ADP approach.

Section 5 indicates three ways of actually implementing an algorithm once it is written in ADP notation: the first alternative is direct embedding and execution in a functional programming language, the second is manual translation to the abstraction level of an imperative programming language. The third alternative, still under development, is the use of a system which directly compiles ADP notation into C code.

In the conclusion, we discuss the merits of the method presented here, evaluate its scope, and glance at possible extensions of the approach.

2 Dynamic programming, traditional style

In this section we discuss four introductory examples of dynamic programming, solved by recurrences in the traditional style. They will be reformulated in algebraic style in later sections. We begin our series of examples with an algorithmic fable.

2.1 Calif El Mamun's caravan

Once upon a time around the year 800, Calif El Mamun of Bagdad, a son of Harun al Raschid, decided to take his two sons on their first hadj to Mecca. He called for the camel dealer, in order to compose a little caravan for the three pilgrims. Each one needed one riding camel, plus three further ones for baggage and water supplies. The price for the former was five tubes of oil, whereas a baggage camel was to be somewhat cheaper.
After a long afternoon of bargaining, the following bill was written in the sand of the court yard:

bill1 = (1 + 2) ∗ (3 ∗ 4 + 5)        (1)

The dealer explained that the (1 + 2) stood for the calif and his two sons, and there were three baggage camels (4 tubes) and one riding camel (5 tubes) for each pilgrim. Unfortunately, before the total price could be calculated (which took a little longer than today in those days), everyone had to turn away for the evening prayers. When they returned to business, they found that the wind had erased the parentheses, leaving only numbers and operators:

bill2 = 1 + 2 ∗ 3 ∗ 4 + 5        (2)

Such selective erasing seemed mysterious. The Imam was called in. He explained that Allah, while agreeing with the basic constituents of the formula, had been displeased by the placement of the parentheses, which should therefore be re-considered. The camel dealer helpfully jumped up to redraw the formula:

bill3 = (1 + 2) ∗ 3 ∗ (4 + 5)        (3)

The dealer argued that now the parentheses showed beautiful symmetry, pleasant to the eye of any higher being. El Mamun was reluctant to agree. Although not good at numbers, he could not help suspecting that the tour to Mecca had become much more expensive. He called for the scientist. Al Chwarizmi, a famous mathematician already, was a wise as well as a cautious man. After giving the problem serious consideration, he spoke: "Honorable Calif, in his infinite wisdom Allah has provided 14 ways to evaluate this formula, with many different outcomes. For example, (1 + 2) ∗ (3 ∗ (4 + 5)) = 81, while (1 + (2 ∗ (3 ∗ 4)) + 5) = 30. An intermediate outcome would be (1 + 2) ∗ ((3 ∗ 4) + 5) = 51, just another example. As all readings are equal in Allah's eyes, the fate of a good Muslim will not depend on the reading adopted, but on the beneficial effects achieved by the choice." Diplomat that he was, with this answer Al Chwarizmi hoped to have passed responsibility back to the Imam.
However, Calif El Mamun could think of beneficial effects himself. He contemplated these words of wisdom over night, and in the morning he ruled as follows:

1. The fraudulent camel dealer was to be buried in the sand, next to the formula (1 + 2) ∗ 3 ∗ (4 + 5).
2. The Calif would take possession of all the camels in the dealer's stock.
3. Al Chwarizmi was granted 51 tubes of oil for a long-term research project. The task was to find a general method to redraw the parentheses in a formula such that the outcome was either minimized or maximized – depending on whether the Calif was on the buying or on the selling side.
4. Until this research was complete, in any case where parentheses were lacking, multiplication should take priority over addition.

Today, we can still observe the outcome of this episode: El Mamun became a very, very rich man, and his name gave rise to the word "mammon", present in many modern languages and meaning an awful lot of money. Al Chwarizmi did not really solve the problem in full generality, since DP was only developed later in the 1950s by Richard Bellman. But by working on it, he made many important discoveries in elementary arithmetic, and thus he became the father of algorithmics. As a consequence of the problem remaining unsolved, until today multiplication takes priority over addition. This ends our fable.

We now provide a DP solution for El Mamun's problem. Clearly, explicit parentheses add some internal structure to a sequence of numbers and operators. They tell us how subexpressions are grouped together – which are sums, and which are products. Let us number the positions in the text t representing the formula, writing each position index between the adjacent characters:

t = ₀ 1 ₁ + ₂ 2 ₃ ∗ ₄ 3 ₅ ∗ ₆ 4 ₇ + ₈ 5 ₉        (4)

such that we can refer to substrings by index pairs: t(0, 9) is the complete string t, and t(2, 5) is 2 ∗ 3. A substring t(i, j) that forms an expression can, in general, be evaluated in many different ways, and we shall record the best value for t(i, j) in a table entry T(i, j).
Since addition and multiplication are strictly monotone functions on positive numbers, an overall value (x + y) or (x ∗ y) can only be maximal if both subexpressions x and y are evaluated to their maximal values. So it in fact suffices to record the maximum in each entry. This is our first use of Bellman's Principle, to be formalized later. More precisely, we define

T(i, i + 1) = n, if t(i, i + 1) = n        (5)
T(i, j) = max{T(i, k) ⊗ T(k + 1, j) | i < k < j, t(k, k + 1) = ⊗}        (6)

where ⊗ is either + or ∗. Beginning with the shortest subwords of t, we can compute successively all defined table entries. In T(0, 9) we obtain the maximal possible value overall. If, together with T(i, j), we also record the position k that leads to the optimal value, then we can reconstruct the reading of the formula that yields the optimal value. It is clear that El Mamun's minimization problem is solved by simply replacing max by min. Figure 1 gives the results for maximization and minimization of El Mamun's bill.

          j=1      j=3      j=5      j=7      j=9
  i=0   (1,1)    (3,3)    (7,9)  (25,36)  (30,81)
  i=2            (2,2)    (6,6)  (24,24)  (29,54)
  i=4                     (3,3)  (12,12)  (17,27)
  i=6                              (4,4)    (9,9)
  i=8                                       (5,5)

Fig. 1. Results for the maximization and minimization of El Mamun's bill, denoted as tuples (x, y) where x is the minimal value and y the maximal value; entries for syntactically invalid substrings are left blank.

Note that we have left open a few technical points: we have not provided explicit code to compute the table T, which is actually triangular, since i is always smaller than j. Such code has to deal with the fact that an entry remains undefined when t(i, j) is not a syntactically valid expression, like t(1, 4) = "+ 2 ∗". In fact, there are about as many undefined entries as there are defined ones, and we may call this a case of sparse dynamic programming and search for a more clever form of tabulation.
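As a minimal executable sketch of recurrences (5) and (6): the token-list encoding of the bill and the function name are our own illustrative choices, and well-formed input is assumed (malformed input is discussed next).

```python
def best_value(tokens, choose):
    """T(i, j) of recurrences (5)-(6); `choose` is max or min."""
    n = len(tokens)
    T = {}
    for i in range(n):                         # equation (5): single numbers
        if isinstance(tokens[i], int):
            T[(i, i + 1)] = tokens[i]
    for length in range(3, n + 1, 2):          # equation (6): shortest subwords first
        for i in range(0, n - length + 1, 2):  # valid expressions start at even positions
            j = i + length
            values = []
            for k in range(i + 1, j, 2):       # k ranges over operator positions
                left, right = T[(i, k)], T[(k + 1, j)]
                values.append(left + right if tokens[k] == '+' else left * right)
            T[(i, j)] = choose(values)
    return T[(0, n)]

bill = [1, '+', 2, '*', 3, '*', 4, '+', 5]
print(best_value(bill, max), best_value(bill, min))  # → 81 30
```

Recording, in addition, the splitting position k that realizes each optimum would allow the optimal reading to be reconstructed, as described above.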
Another open point is the possibility of malformed input, like the non-expression "1 + ∗ 2". The implementation shown later will take care of all these cases.

Exercise 1 Al Chwarizmi remarked that there were 14 different ways to evaluate the bill. Develop a dynamic programming algorithm to compute the number of alternative interpretations for a parenthesis-free formula.

The solution to Exercise 1 closely follows the recurrences just developed, except that there is no maximization or minimization involved. This one is a combinatorial counting problem. Although DP is commonly associated with optimization problems, we see that its scope is actually wider.

2.2 Matrix chain multiplication

A classic dynamic programming example is the matrix chain multiplication problem [CLR90]. Given a chain of matrices A1, ..., An, find an optimal placement of parentheses for computing the product A1 ∗ ... ∗ An. The placement of parentheses can dramatically reduce the number of scalar multiplications needed. Consider three matrices A1, A2, A3 with dimensions 10 × 100, 100 × 5 and 5 × 50. Multiplication of (A1 ∗ A2) ∗ A3 needs 10 ∗ 100 ∗ 5 + 10 ∗ 5 ∗ 50 = 7500 scalar multiplications, in contrast to 10 ∗ 100 ∗ 50 + 100 ∗ 5 ∗ 50 = 75000 when choosing A1 ∗ (A2 ∗ A3).

Let M be an n × n table. Table entry M(i, j) shall hold the minimal number of multiplications needed to calculate Ai ∗ ... ∗ Aj. Compared to the previous example, the construction of the search space is a lot easier here, since it does not depend on the structure of the input sequence but only on its length. M(i, j) = 0 for i = j. In any other case there exist j − i possible splittings of the matrix chain Ai, ..., Aj into two parts (Ai, ..., Ak) and (Ak+1, ..., Aj). Let (ri, ci) be the dimension of matrix Ai. Multiplying the two partial product matrices requires ri ∗ ck ∗ cj operations. Again we observe Bellman's principle: only if the partial products have themselves been arranged internally in an optimal fashion can this product minimize scalar multiplications overall.
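The minimization just described is easy to sketch in code. Representing each matrix Ai by its dimension pair (ri, ci) is our own encoding; the recurrence itself is given below as (7).

```python
def matrix_chain(dims):
    """Minimal scalar multiplications for A1 * ... * An, with dims[i] = (r, c)."""
    n = len(dims)
    M = [[0] * n for _ in range(n)]
    for j in range(n):                  # column by column, i descending:
        for i in range(j - 1, -1, -1):  # M(i, k) and M(k+1, j) are already filled
            M[i][j] = min(M[i][k] + M[k + 1][j]
                          + dims[i][0] * dims[k][1] * dims[j][1]  # r_i * c_k * c_j
                          for k in range(i, j))
    return M[0][n - 1]

# The example from the text: dimensions 10x100, 100x5, 5x50
print(matrix_chain([(10, 100), (100, 5), (5, 50)]))  # → 7500
```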
We order table calculation by increasing subchain length, such that we can look up all the M(i, k) and M(k + 1, j) when needed for computing M(i, j). This leads to the following matrix recurrence:

for j = 1 to n do
  for i = j to 1 do
    M(i, j) = 0                                                    for i = j        (7)
    M(i, j) = min{M(i, k) + M(k + 1, j) + ri ∗ ck ∗ cj | i ≤ k < j}   for i < j

Minimization over all possible splittings gives the optimal value for M(i, j). This example demonstrates that dynamic programming over sequence data is not necessarily limited to (character) strings, but can also be used with sequences of other types, in this case pairs of numeric values.

Exercise 2 Design an algorithm to minimize the size (a) of the largest intermediate matrix that needs to be computed, and (b) of the overall storage needed at any point for intermediate matrices during the computation of the chain product.

The first part of Exercise 2 closely follows the recurrences developed above, while the latter includes optimizing the evaluation order, and is more complicated.

2.3 Global and local similarity of strings

We continue our series of examples by looking at the comparison of strings. The measurement of similarity or distance of two strings is an important operation applied in several fields, for example spelling correction, textual database retrieval, speech recognition, coding theory, and molecular biology. A common formalization is the string edit model. We measure the similarity of two strings by scoring the different sequences of character deletions (denoted by D), character insertions (denoted by I) and character replacements (denoted by R) that transform one string into the other. If a character is unchanged, we formally model this as a replacement by itself. Thus, an edit operation is applied at each position. Figure 2 shows some possibilities to transform the string MISSISSIPPI into the string SASSAFRAS, visualized as alignments.
MISSI--SSIPPI    MISSISSIPPI    MISSI---SSIPPI
SASSAFRAS----    --SASSAFRAS    SASSAFRAS-----
RR  RIIR DDDD    DD RR RRRRR    RR  RIII DDDDD

Fig. 2. Three out of many possible ways to transform the string MISSISSIPPI into the string SASSAFRAS. Only deletions, insertions, and proper replacements are marked.

A similarity scoring function δ associates a similarity score of 0 with two empty strings, a positive score with two characters that are considered similar, and a negative score with two characters that are considered dissimilar. Insertions and deletions also receive negative scores. For strings x of length m and y of length n, we compute the similarity matrix Em,n such that E(i, j) holds the similarity score for the prefixes x1, . . . , xi and y1, . . . , yj. E(m, n) holds the overall similarity value of x and y. E is calculated by the following recurrences:

E(0, 0) = 0        (8)
for i = 0 to m − 1 do  E(i + 1, 0) = E(i, 0) + δ(D(xi+1))        (9)
for j = 0 to n − 1 do  E(0, j + 1) = E(0, j) + δ(I(yj+1))        (10)
for i = 0 to m − 1 do
  for j = 0 to n − 1 do
    E(i + 1, j + 1) = max{ E(i, j + 1) + δ(D(xi+1)),
                           E(i + 1, j) + δ(I(yj+1)),
                           E(i, j) + δ(R(xi+1, yj+1)) }        (11)
return E(m, n)        (12)

The space and time efficiency of these recurrences is O(mn).

Often, rather than computing the (global) similarity of two strings, it is required to search for local similarities within two strings. In molecular sequence analysis, we study DNA sequences, given as strings from four types of nucleotides, or protein sequences, given as strings from twenty types of amino acids. In DNA, we often have long non-coding regions and small coding regions. If two coding regions are similar, this does not imply that the sequences have a large global similarity. If we investigate a protein with unknown function, we are interested in finding a 'similar' protein with known biological function. In this situation the functionally important sequence parts must be similar while the rest is arbitrary.
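Before turning to the local variant in detail, recurrences (8)–(12) translate almost verbatim into code. This is a minimal sketch: the concrete scoring scheme (+1 for a match, −1 for mismatch, insertion, and deletion) is our own illustrative choice, and the second function anticipates the local-similarity recurrences discussed next (zero borders, a zero case, and a table-wide maximum).

```python
def global_similarity(x, y, match=1, mismatch=-1, gap=-1):
    """Recurrences (8)-(12): similarity of the complete strings x and y."""
    m, n = len(x), len(y)
    E = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):                      # (9): delete x[i+1]
        E[i + 1][0] = E[i][0] + gap
    for j in range(n):                      # (10): insert y[j+1]
        E[0][j + 1] = E[0][j] + gap
    for i in range(m):
        for j in range(n):                  # (11): best of deletion, insertion, replacement
            E[i + 1][j + 1] = max(
                E[i][j + 1] + gap,
                E[i + 1][j] + gap,
                E[i][j] + (match if x[i] == y[j] else mismatch))
    return E[m][n]                          # (12)

def local_similarity(x, y, match=1, mismatch=-1, gap=-1):
    """Smith-Waterman variant: borders stay 0, a zero case, table-wide max."""
    m, n = len(x), len(y)
    C = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            C[i + 1][j + 1] = max(
                0,                          # an empty prefix joins the competition
                C[i][j + 1] + gap,
                C[i + 1][j] + gap,
                C[i][j] + (match if x[i] == y[j] else mismatch))
    return max(max(row) for row in C)

print(global_similarity("ABC", "AC"))        # → 1
print(local_similarity("XXABCYY", "ZABCZ"))  # → 3
```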
Local similarity asks for the best match of a subword of x with a subword of y. The Smith and Waterman algorithm [SW81] needs O(mn) time and space to solve this problem. We compute a matrix Cm,n where the entry (i, j) contains the best score for all pairs of suffixes of x1 . . . xi and y1 . . . yj:

C(i, j) = max{score(x′, y′) | x′ suffix of x1 . . . xi and y′ suffix of y1 . . . yj}        (13)

Since we look for a local property, it must be possible to match arbitrary subwords of x and y without scoring their dissimilar prefixes and suffixes. In order to reject prefixes with negative scores, all we need to change in comparison to the recurrences for global similarity (see Equations 8 – 12) is to fix the first row and column to zero values and to add a zero-value case in the calculation of the entry C(i + 1, j + 1). This zero amounts to an empty prefix joining the competition for the best match at each point (i, j). This leads to the following recurrences:

If i = 0 or j = 0, then C(i, j) = 0.        (14)

Otherwise,
C(i + 1, j + 1) = max{ 0,
                       C(i, j + 1) + δ(D(xi+1)),
                       C(i + 1, j) + δ(I(yj+1)),
                       C(i, j) + δ(R(xi+1, yj+1)) }        (15)
return maxi,j C(i, j)        (16)

Equation 16 performs another traversal of table C to obtain the highest score overall.

Exercise 3 Add appropriate for-loops to Equations 14 – 16.

Exercise 4 Remove the maximization in Equation 16. Use instead another table Dm,n, such that D(i, j) = max{C(k, l) | k ≤ i, l ≤ j}. Can we compute D within the for-loops controlling Equation 15?

2.4 Fibonacci numbers and the case of recursion versus tabulation

In this last introductory example, we make our first deviation from the traditional view of dynamic programming. There are many simple cases where the principles of DP are applied without explicit notice. Fibonacci numbers are a famous case in point. They are defined by

F(1) = 1        (17)
F(2) = 1        (18)
F(i + 2) = F(i + 1) + F(i)        (19)

Fibonacci numbers may seem atypical as a DP problem, because there is no optimization criterion.
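These recurrences can be run directly as a recursive function, or turned into a table filled by an explicit loop. The contrast between the two, taken up below, can be sketched as follows (the function names are ours):

```python
def fib_rec(n):
    """Equations (17)-(19) taken literally: exponentially many calls."""
    return 1 if n <= 2 else fib_rec(n - 1) + fib_rec(n - 2)

def fib_table(n):
    """The same recurrences over an integer array, filled in explicit order."""
    F = [0] * (max(n, 2) + 1)
    F[1] = F[2] = 1
    for i in range(3, n + 1):       # for-loop supplies the evaluation order
        F[i] = F[i - 1] + F[i - 2]
    return F[n]

print(fib_rec(10), fib_table(10))  # → 55 55
```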
We have already seen (cf. Exercise 1) that optimization is an important, but not an essential, part of the dynamic programming paradigm. Every student of computer science knows that computing F as a recursive function is very inefficient – it takes exactly 2F(n) − 1 calls to F to compute F(n). Although we really need only the n values F(1) through F(n − 1) when computing F(n), each value F(n − k) is calculated not once, but F(k + 1) times. The textbook remedy to this inefficiency is strengthening the recursion – define

F(n) = Fib(0, 1, 1), where        (20)
Fib(a, b, i) = if (i = n) then b else Fib(b, a + b, i + 1)        (21)

Here we shall consider another solution. This one requires no redefinition of F at all, just a change of data type: consider F as an integer array, whose elements are defined via Equations 17 – 19. In a data-driven programming language, its elements will be calculated once when first needed. In an imperative language, since F is data now rather than a function, we need to add explicit control structure – an upper bound for n and a for-loop to actually calculate the array elements in appropriate order.

The lesson here is the observation that a table (matrix, array) over some index domain, defined by recurrences, is mathematically equivalent to a recursive function defined by the very same recurrences. This gives us a first guiding rule for the systematic development of DP algorithms: think of a DP algorithm as a family of recursive functions over some index domain. Don't worry about tabulation and evaluation order; this can always be added when the design has stabilized.

Exercise 5 Show how this discussion applies to the local similarity problem. Implement table C as a recursive function, and use another function D for the last equation. Does function D require tabulation to be efficient?

3 Foundations of the algebraic approach to dynamic programming

3.1 Basic terminology

Alphabets. An alphabet A is a finite set of symbols.
Sequences of symbols are called strings. ε denotes the empty string, A^1 = A, A^(n+1) = {ax | a ∈ A, x ∈ A^n}, A+ = ⋃ n≥1 A^n, A∗ = A+ ∪ {ε}.

Signatures and algebras. A (single-sorted) signature Σ over some alphabet A consists of a sort symbol S together with a family of operators. Each operator o has a fixed arity o : s1 ... sk → S, where each si is either S or A. A Σ-algebra I over A, also called an interpretation, is a set SI of values together with a function oI for each operator o. Each oI has type oI : (s1)I ... (sk)I → SI, where AI = A. A term algebra TΣ arises by interpreting the operators in Σ as constructors, building bigger terms from smaller ones. When variables from a set V can take the place of arguments to constructors, we speak of a term algebra with variables, TΣ(V), with V ⊂ TΣ(V). By convention, operator names are capitalized in the term algebra.

Tree grammars. Terms will be viewed as rooted, ordered, node-labeled trees in the obvious way. All inner nodes carry (non-nullary) operators from Σ, while leaf nodes carry nullary operators or symbols from A. A term/tree with variables is called a tree pattern. A tree containing a designated occurrence of a subtree t is denoted C[...t...]. A tree language over Σ is a subset of TΣ. Tree languages are described by tree grammars, which can be defined in analogy to the Chomsky hierarchy of string grammars. Here we use regular tree grammars, originally studied in [Bra69]. In [GS88] they were redefined to specify term languages over some signature. Our further specialization so far lies solely in the distinguished role of A.

Definition 1 (Tree grammar over Σ and A.) A (regular) tree grammar G over Σ and A is given by
– a set V of nonterminal symbols,
– a designated nonterminal symbol Ax called the axiom, and
– a set P of productions of the form v → t, where v ∈ V and t ∈ TΣ(V).
The derivation relation for tree grammars is →∗, with C[...v...] → C[...t...] if v → t ∈ P.
The language of v ∈ V is L(v) = {t ∈ TΣ | v →∗ t}; the language of G is L(G) = L(Ax).

For convenience, we add a lexical level to the grammar concept, allowing strings from A∗ in place of single symbols. By convention, achar denotes an arbitrary character, char c a specific character c, string an arbitrary string, and empty the empty input. Also for brevity, we allow syntactic conditions associated with right-hand sides.
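To make the term-algebra view concrete, here is a small sketch in our own encoding: terms over constructors Add, Mult and Val, whose yields are exactly the parenthesis-free formulas of Section 2.1. The naive enumeration below is only an illustration of the search space; the tabulating yield parsers described later construct it efficiently.

```python
def terms(tokens):
    """All terms over {Add, Mult, Val} whose yield is the given token list."""
    if len(tokens) == 1:
        return [('Val', tokens[0])]     # a leaf carries a symbol of the alphabet
    result = []
    for k in range(1, len(tokens), 2):  # each operator position may be the root
        op = 'Add' if tokens[k] == '+' else 'Mult'
        for left in terms(tokens[:k]):
            for right in terms(tokens[k + 1:]):
                result.append((op, left, right))
    return result

bill = [1, '+', 2, '*', 3, '*', 4, '+', 5]
print(len(terms(bill)))  # → 14, the number of readings Al Chwarizmi reported
```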


Publication date: 2002